-
Is it possible that so-called “Artificial intelligence” cannot define things? This is just a thought that came to mind.
-
Even when those definitions aren’t explicitly stated, humans can recognize things because they hold definitions for them.
- Things like apples, desks, cells, addition, etc.
- In a philosophical sense, could this be equivalent to an “Idea” (a Platonic Form)? (I’m not well-versed in philosophy, so this is just a guess.)
-
Or maybe the order is reversed?
- Does it become a chicken-and-egg problem?
-
For example, in the case of Machine Learning, is it only because humans define the Loss Function that the model is able to learn?
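That last question can be made concrete with a toy sketch (my own illustration, not from the note above): the model never decides for itself what counts as “wrong” — a human writes that down as the loss function, and the update rule only follows its gradient.

```python
# Toy sketch: fit w in y ≈ w * x by gradient descent.
# The data and functions here are hypothetical examples.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # underlying relation: y = 2x

def loss(w):
    # The human-chosen definition of "wrong": mean squared error.
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def grad(w):
    # Derivative of that loss with respect to w.
    return sum(2 * (w * x - y) * x for x, y in data) / len(data)

w = 0.0
for _ in range(100):
    w -= 0.1 * grad(w)  # the update only "knows" what the loss says

print(round(w, 3))  # converges toward 2.0
```

Swap in a different loss (say, mean absolute error) and the same procedure can converge to a different answer — in that sense the “definition” of success sits entirely on the human side.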